11 research outputs found
Complex Logical Reasoning over Knowledge Graphs using Large Language Models
Reasoning over knowledge graphs (KGs) is a challenging task that requires a
deep understanding of the complex relationships between entities and the
underlying logic of their relations. Current approaches rely on learning
geometries to embed entities in vector space for logical query operations, but
they suffer from subpar performance on complex queries and dataset-specific
representations. In this paper, we propose a novel decoupled approach,
Language-guided Abstract Reasoning over Knowledge graphs (LARK), that
formulates complex KG reasoning as a combination of contextual KG search and
abstract logical query reasoning, to leverage the strengths of graph extraction
algorithms and large language models (LLMs), respectively. Our experiments
demonstrate that the proposed approach outperforms state-of-the-art KG
reasoning methods on standard benchmark datasets across several logical query
constructs, with significant performance gain for queries of higher complexity.
Furthermore, we show that the performance of our approach improves
proportionally to the increase in size of the underlying LLM, enabling the
integration of the latest advancements in LLMs for logical reasoning over KGs.
Our work presents a new direction for addressing the challenges of complex KG
reasoning and paves the way for future research in this area.
Comment: Code available at https://github.com/Akirato/LLM-KG-Reasonin
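As a rough illustration of the decoupled design described above, the sketch below first runs a contextual KG search (a k-hop expansion around the query's anchor entities) and then packages the retrieved triples, together with the logical query, into a prompt for an LLM. The toy knowledge graph, the entity and relation names, and the prompt format are illustrative assumptions, not the LARK implementation.

```python
# Hedged sketch of the decoupled pipeline: (1) contextual KG search,
# (2) abstract logical reasoning delegated to an LLM via a text prompt.
# The toy KG and the query syntax below are illustrative assumptions.
from collections import deque

KG = {  # toy knowledge graph as (head, relation, tail) triples
    ("marie_curie", "won", "nobel_prize_physics"),
    ("marie_curie", "field", "chemistry"),
    ("pierre_curie", "won", "nobel_prize_physics"),
    ("nobel_prize_physics", "awarded_in", "stockholm"),
}

def k_hop_subgraph(anchors, k=2):
    """Contextual KG search: breadth-first expansion around the query anchors."""
    frontier, seen, triples = deque(anchors), set(anchors), set()
    for _ in range(k):
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for h, r, t in KG:
                if node in (h, t):
                    triples.add((h, r, t))
                    for nxt in (h, t):
                        if nxt not in seen:
                            seen.add(nxt)
                            frontier.append(nxt)
    return sorted(triples)

def build_prompt(logical_query, context_triples):
    """Serialize the retrieved subgraph and the logical query for the LLM."""
    facts = "\n".join(f"{h} --{r}--> {t}" for h, r, t in context_triples)
    return f"Facts:\n{facts}\n\nAnswer the logical query: {logical_query}"

prompt = build_prompt(
    "?x : won(?x, nobel_prize_physics) AND field(?x, chemistry)",
    k_hop_subgraph({"nobel_prize_physics", "chemistry"}),
)
print(prompt)  # this text would then be sent to the LLM of choice
```

In the actual pipeline, the answer returned by the LLM would then be decoded back into KG entities; the sketch only covers prompt construction.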
Hyperbolic Graph Neural Networks at Scale: A Meta Learning Approach
Progress in research on hyperbolic neural networks (HNNs) is hindered by
their lack of inductive bias mechanisms, which are essential for
generalizing to new tasks and facilitating scalable learning over large
datasets. In this paper, we aim to alleviate these issues by learning
generalizable inductive biases from the nodes' local subgraphs and transferring them
for faster learning over new subgraphs with a disjoint set of nodes, edges, and
labels in a few-shot setting. We introduce a novel method, Hyperbolic GRAph
Meta Learner (H-GRAM), that, for the tasks of node classification and link
prediction, learns transferable information from a set of support local
subgraphs in the form of hyperbolic meta gradients and label hyperbolic
protonets to enable faster learning over a query set of new tasks dealing with
disjoint subgraphs. Furthermore, we show that an extension of our meta-learning
framework also mitigates the scalability challenges faced by existing HNN
approaches. Our comparative analysis shows that H-GRAM effectively
learns and transfers information in multiple challenging few-shot settings
compared to other state-of-the-art baselines. Additionally, we demonstrate
that, unlike standard HNNs, our approach is able to scale over large graph
datasets and improve performance over its Euclidean counterparts.
Comment: Accepted to NeurIPS 2023. 14 pages of main paper, 5 pages of supplementary material
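To make the "label hyperbolic protonets" idea above concrete, the sketch below forms class prototypes in the Poincaré ball by averaging support embeddings in the tangent space at the origin and classifies a query point by hyperbolic distance. It is a generic prototypical-network illustration under that assumption; the random embeddings and the exact aggregation are placeholders rather than the paper's implementation.

```python
# Hedged sketch of hyperbolic prototypical classification in a few-shot
# setting. Embeddings are random placeholders, not H-GRAM outputs.
import numpy as np

def log0(x):
    """Logarithm map at the origin of the Poincare ball (curvature -1)."""
    n = np.linalg.norm(x, axis=-1, keepdims=True).clip(1e-9, 1 - 1e-6)
    return np.arctanh(n) * x / n

def exp0(v):
    """Exponential map at the origin of the Poincare ball."""
    n = np.linalg.norm(v, axis=-1, keepdims=True).clip(1e-9)
    return np.tanh(n) * v / n

def poincare_dist(u, v):
    """Geodesic distance between two points inside the unit ball."""
    num = 2 * np.sum((u - v) ** 2)
    den = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + num / den)

def hyperbolic_prototype(support):
    """Aggregate support embeddings via the tangent space at the origin."""
    return exp0(log0(support).mean(axis=0))

rng = np.random.default_rng(0)
support = {c: 0.3 * rng.standard_normal((5, 8)) for c in ("A", "B")}  # few-shot support sets
protos = {c: hyperbolic_prototype(exp0(e)) for c, e in support.items()}
query = exp0(0.3 * rng.standard_normal(8))
pred = min(protos, key=lambda c: poincare_dist(query, protos[c]))
print("predicted class:", pred)
```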
A Unification Framework for Euclidean and Hyperbolic Graph Neural Networks
Hyperbolic neural networks are able to capture the inherent hierarchy of
graph datasets and are consequently a powerful choice for GNNs. However, they
entangle multiple incongruent (gyro-)vector spaces within a layer, which limits
their generalization and scalability. In this work, we
propose to use the Poincaré disk model as our search space and to apply all
approximations on the disk (treating the disk as a tangent space derived from
the origin), thereby eliminating all inter-space transformations. Such an
approach enables us to propose a hyperbolic normalization layer, and to further
simplify the entire hyperbolic model to a Euclidean model cascaded with our
hyperbolic normalization layer. We apply our proposed nonlinear hyperbolic
normalization to the current state-of-the-art homogeneous and multi-relational
graph networks. We demonstrate that the model not only leverages the strengths
of Euclidean networks, such as interpretability and efficient execution of
various model components, but also outperforms both its Euclidean and
hyperbolic counterparts in our benchmarks.
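The cascade described above can be pictured with a short sketch: an ordinary Euclidean layer whose output is pushed into the open Poincaré disk/ball by a final normalization step. Since the abstract does not spell out the exact form of the normalization, the exponential map at the origin is used here as an assumed stand-in; the layer sizes and data are placeholders.

```python
# Hedged sketch of "Euclidean model cascaded with a hyperbolic
# normalization layer". The normalization form is an assumption.
import numpy as np

def euclidean_layer(x, W, b):
    """Standard affine layer with ReLU, executed entirely in Euclidean space."""
    return np.maximum(0.0, x @ W + b)

def hyperbolic_normalization(h, scale=1.0):
    """Map Euclidean features into the open unit (Poincare) ball.

    Treats h as a tangent vector at the origin and applies the exponential
    map, so the output norm is tanh(scale * ||h||) < 1.
    """
    n = np.linalg.norm(h, axis=-1, keepdims=True).clip(1e-9)
    return np.tanh(scale * n) * h / n

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 16))             # a batch of node features
W, b = rng.standard_normal((16, 8)), np.zeros(8)
z = hyperbolic_normalization(euclidean_layer(x, W, b))
print(np.linalg.norm(z, axis=-1))            # every row lies strictly inside the unit ball
```

Keeping all operations in the tangent space at the origin is what lets the Euclidean components (and their tooling) be reused unchanged, with hyperbolic geometry entering only through this last step.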
Twitter corpus of Resource-Scarce Languages for Sentiment Analysis and Multilingual Emoji Prediction
This dataset is created by leveraging social media platforms such as Twitter to develop corpora across multiple languages. The corpus-creation methodology is applicable to resource-scarce languages, provided the speakers of a particular language are active users on social media platforms. We present an approach to extract social media microblogs such as tweets from Twitter. We create corpora for multilingual sentiment analysis and emoji prediction in Hindi, Bengali, and Telugu. Further, we perform and analyze multiple NLP tasks using the corpus and report several interesting observations.
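As a toy illustration of turning already-collected tweets into an emoji-prediction corpus in the spirit described above, the sketch below uses the emoji occurring in a tweet as labels and the emoji-stripped text as the input. Tweet collection and language filtering are assumed to have happened upstream; the example tweets and the deliberately small emoji regex are placeholders.

```python
# Hedged sketch: derive (text, emoji-label) pairs from raw tweet strings.
# The tweets below are invented placeholders, not dataset samples.
import re

# Covers only the most common emoji blocks; a production corpus would
# need a fuller set of Unicode ranges.
EMOJI_RE = re.compile(
    "[\U0001F300-\U0001F5FF\U0001F600-\U0001F64F\U0001F680-\U0001F6FF\u2600-\u27BF]"
)

def to_example(tweet):
    """Split one tweet into (text without emoji, list of emoji labels)."""
    labels = EMOJI_RE.findall(tweet)
    text = EMOJI_RE.sub("", tweet).strip()
    return text, labels

raw_tweets = [
    "what a match today 🔥🔥",
    "missed the bus again 😩",
]
corpus = [to_example(t) for t in raw_tweets if EMOJI_RE.search(t)]
for text, labels in corpus:
    print(text, "->", labels)
```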